You're ready to start unit testing in a more managed, process-oriented way. How do you do it? Let's say, for example, you have Microsoft Team System in place, and you use it for source control and tracking work items. Obviously, you can do much more with Team System, but let's start with the basics. You've tasted enough unit testing to feel that everyone can benefit from it. Where would you start?
Here are my thoughts. This is what I would consider the essential setup to start with:
1. Check in whatever tests you already have, and start checking in any new test code. You don't want to lose what you already have. Even if you don't have a continuous integration system in place (see the next step), you can run the tests manually on your (or any) machine anytime.
2. Use a continuous build system. In Team System that's Team Build; you can also use CruiseControl.NET, TeamCity, or any other CI tool. Whenever someone checks in code, the build starts automatically and reports success or failure. Even without running the tests, this gives you a compilation status report, which is a good starting point. The entire build process, from beginning to end, should be automatic, with no one intervening or nudging just one thing to make it work. Think of it as the pulse of the project. If the build fails, fix it pronto. Don't let the poor team lose its pulse.
3. Add test execution to the build script (or integration script). Every automated test should run within the build cycle, and the build report should include the test results. (A minimal sketch of the kind of test I mean appears right after this list.)
4. Make a habit of looking at the test report. If the build fails because of a test failure, consider it good feedback. It motivates you to do something: fix the test, change it, delete it, or even deliberately ignore it and leave it there. The point is to be aware of the test results. From there you can improve.
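To make steps 1 and 3 concrete, here's a minimal sketch of the kind of checked-in test the build should pick up. I'm using JUnit here purely for illustration; in a Team System shop you'd more likely write the same thing with MSTest or NUnit, but the shape is identical. The InvoiceCalculator class and its methods are hypothetical names, not from any real codebase.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceCalculatorTest {

    // Hypothetical unit under test, inlined so the sketch compiles on its own.
    static class InvoiceCalculator {
        private double total;
        void addLineItem(String name, double price) { total += price; }
        double total() { return total; }
    }

    // Once this file is checked in, the build server compiles and runs it
    // on every commit, and the build report shows its pass/fail status.
    @Test
    public void total_singleLineItem_returnsItemPrice() {
        InvoiceCalculator calculator = new InvoiceCalculator();
        calculator.addLineItem("Widget", 25.00);
        assertEquals(25.00, calculator.total(), 0.001);
    }
}
```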
Once you have this setup in place, you can start improving other stuff, like, say, prioritizing work or tracking your progress. In the next posts, I'll discuss what to track and how to manage your unit tests within a process.
Most teams vary in their composition. You won't get a team made up entirely of top performers most of the time. Depending on the size of the team, you'll have a few technical leaders and mostly average developers. If you have below-average developers, get rid of them; they will just slow you down.
I'll assume that management wants the process to work (that may not always be the case). When it does, no one blocks the team, and they are given enough time to learn, adjust, and refactor their code. If they're lucky, they also get more tools to help them.
The first practice you need in place is code review, and if you can manage it, its bigger and better sibling, pair programming. The best way for people to learn is to discuss what their code does. And when you pair an experienced developer with a less experienced one, the review doubles as guidance for the latter.
Reviewing tests is an excellent way to make sure the code is tested correctly. Now, if you review the tests, that means you have tests to review, which means you're implementing the process. If there are no tests, stop and write them. Experienced developers can tell if the tests are testing too much, if they are too long, and if they make sense at all. And through the code review process, this experience is passed to the less experienced.
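Here's a hedged sketch of what a reviewer might flag: one test asserting several unrelated behaviors, followed by the reviewed version with one behavior per test. The Cart class and its methods are made up for the example.

```java
import org.junit.Test;
import static org.junit.Assert.*;
import java.util.ArrayList;
import java.util.List;

public class CartTest {

    // Hypothetical unit under test, inlined so the example compiles.
    static class Cart {
        private final List<Double> prices = new ArrayList<Double>();
        void addItem(String name, double price) { prices.add(price); }
        int itemCount() { return prices.size(); }
        void clear() { prices.clear(); }
        boolean isEmpty() { return prices.isEmpty(); }
    }

    // A reviewer would flag this: one test, several unrelated behaviors.
    // If the first assert fails, the rest never run and you learn nothing
    // about them.
    @Test
    public void testCart() {
        Cart cart = new Cart();
        cart.addItem("Book", 10.00);
        assertEquals(1, cart.itemCount());
        cart.clear();
        assertTrue(cart.isEmpty());
    }

    // After the review: each behavior gets its own test.
    @Test
    public void addItem_singleItem_itemCountIsOne() {
        Cart cart = new Cart();
        cart.addItem("Book", 10.00);
        assertEquals(1, cart.itemCount());
    }

    @Test
    public void clear_nonEmptyCart_cartBecomesEmpty() {
        Cart cart = new Cart();
        cart.addItem("Book", 10.00);
        cart.clear();
        assertTrue(cart.isEmpty());
    }
}
```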
Now let's talk conventions. Does the opening brace go on its own line? It does? Really? Well, you've just lost five seconds of precious development time checking that. The team should agree on the conventions that work for them and stick to them. But more important, agree on what actually matters and helps the development effort. Like, um... READABILITY.
Tests should be readable. Let's start with naming: what do we test? What's the scenario, and what's the expected result? The test name should say all that. When it does, you can look at the test code and verify that it actually does what the name says. If not, maybe it's a sign you need another test. Or maybe you should rewrite the test completely.
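One common scheme encodes all three parts in the name: unit of work, scenario, expected result, separated by underscores. A hedged sketch, with a made-up LoginService standing in for real code:

```java
import org.junit.Test;
import static org.junit.Assert.*;
import java.util.HashMap;
import java.util.Map;

public class LoginServiceTest {

    // Hypothetical unit under test, inlined so the sketch compiles.
    static class LoginService {
        private final Map<String, Integer> failures = new HashMap<String, Integer>();
        boolean login(String user, String password) {
            if ("secret".equals(password)) return true;
            Integer count = failures.get(user);
            failures.put(user, count == null ? 1 : count + 1);
            return false;
        }
        boolean isLocked(String user) {
            Integer count = failures.get(user);
            return count != null && count >= 3;
        }
    }

    // The name answers all three questions: what we test (login), the
    // scenario (wrong password), and the expected result (access denied).
    @Test
    public void login_wrongPassword_accessDenied() {
        LoginService service = new LoginService();
        assertFalse(service.login("user", "wrong-password"));
    }

    // If this body drifted into testing something other than lockout,
    // the mismatch with the name would be the sign you need another test.
    @Test
    public void login_threeFailedAttempts_accountLocked() {
        LoginService service = new LoginService();
        service.login("user", "bad1");
        service.login("user", "bad2");
        service.login("user", "bad3");
        assertTrue(service.isLocked("user"));
    }
}
```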
How long is the test in lines of code? Three screens long? Too long. Scrolling up and down does not help readability. Are you testing too much? Separate the tests. Are you setting up too much? Time to learn about testing in isolation. Is it really an integration test? Move it to the integration test suite.
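For the setting-up-too-much case, one cheap fix before reaching for full isolation is pulling the repeated setup into a shared fixture, so each test body stays on one screen. A sketch under the assumption of hypothetical Warehouse and Order classes:

```java
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;

public class OrderProcessingTest {

    // Hypothetical units under test, inlined so the sketch compiles.
    static class Warehouse {
        private final Map<String, Integer> inventory = new HashMap<String, Integer>();
        void stock(String item, int count) { inventory.put(item, count); }
        void remove(String item, int count) { inventory.put(item, countOf(item) - count); }
        int countOf(String item) {
            Integer c = inventory.get(item);
            return c == null ? 0 : c;
        }
    }

    static class Order {
        private final String item;
        private final int quantity;
        Order(String item, int quantity) { this.item = item; this.quantity = quantity; }
        void fillFrom(Warehouse warehouse) { warehouse.remove(item, quantity); }
    }

    private Warehouse warehouse;

    @Before
    public void setUp() {
        // Shared setup runs before every test instead of being
        // copy-pasted into each one.
        warehouse = new Warehouse();
        warehouse.stock("Widget", 50);
    }

    @Test
    public void fillFrom_itemInStock_reducesInventory() {
        Order order = new Order("Widget", 10);
        order.fillFrom(warehouse);
        assertEquals(40, warehouse.countOf("Widget"));
    }
}
```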
Is your test brittle? Does it know too much about the production code it's testing? Time to rethink. Are we drawing the isolation line in the correct place? Does it mimic the production code, line by line? That's a no-no. You're not testing what you think you're testing. Isolate.
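Here's a hedged sketch of drawing the isolation line with a hand-rolled fake: the test exercises the unit through its observable behavior and substitutes the external dependency, instead of mirroring the production code's internal steps. All names (PriceService, RateProvider) are invented for the illustration.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceServiceTest {

    // The dependency the production code normally gets from a remote service.
    interface RateProvider {
        double taxRate(String region);
    }

    // Hand-rolled fake: a fixed answer, no network, no knowledge of
    // production internals.
    static class FakeRateProvider implements RateProvider {
        public double taxRate(String region) {
            return 0.10;
        }
    }

    // Hypothetical unit under test, inlined so the sketch compiles.
    static class PriceService {
        private final RateProvider rates;
        PriceService(RateProvider rates) { this.rates = rates; }
        double priceWithTax(String region, double net) {
            return net * (1 + rates.taxRate(region));
        }
    }

    @Test
    public void priceWithTax_tenPercentRate_addsTax() {
        PriceService service = new PriceService(new FakeRateProvider());
        // Assert on the observable result, not on the steps the production
        // code takes internally; that keeps the test from breaking on
        // every refactoring.
        assertEquals(110.0, service.priceWithTax("EU", 100.0), 0.001);
    }
}
```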
Code review is the second most effective practice I know for implementing unit testing. The best is pair programming: it instills the conventions into the production code as you go, which is more effective than intermittent code review. So go for it if you can. We do.